
    Toward an Effective Automated Tracing Process

    Traceability is defined as the ability to establish, record, and maintain dependency relations among the various software artifacts of a software system, in both the forward and backward directions, throughout the multiple phases of the project’s life cycle. The availability of traceability information has proven vital to several software engineering activities, such as program comprehension, impact analysis, feature location, software reuse, and verification and validation (V&V). Research on automated software traceability has advanced noticeably in the past few years. Various methodologies and tools have been proposed in the literature to provide automatic support for establishing and maintaining traceability information in software systems. This movement is motivated by the increasing attention traceability has been receiving as a critical element of any rigorous software development process. Despite these major advances, however, traceability implementation and use are still not pervasive in industry. In particular, traceability tools are still far from achieving performance levels that are adequate for practical applications. Such low levels of accuracy require software engineers working with traceability tools to spend a considerable amount of their time verifying the generated traceability information, a process often described as tedious, exhausting, and error-prone. Motivated by these observations, and building upon a growing body of work in this area, in this dissertation we explore several research directions related to enhancing the performance of automated tracing tools and techniques. In particular, our work addresses several issues in the IR-based automated tracing process, including trace link retrieval, performance enhancement, and the role of the human in the process. Our main objective is to achieve performance levels, in terms of accuracy, efficiency, and usability, that are adequate for practical applications, and ultimately to accomplish a successful technology transfer from research to industry.
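The IR-based trace link retrieval the abstract refers to is commonly implemented by ranking candidate artifact pairs in a vector space model. A minimal sketch, assuming TF-IDF weighting and cosine similarity (one common IR configuration, not necessarily the dissertation's exact setup; the artifact names and texts below are hypothetical):

```python
# Sketch of IR-based trace link retrieval: rank code artifacts against a
# requirement by TF-IDF cosine similarity. Illustrative data only.
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build a TF-IDF vector (term -> weight) for each tokenized document."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))  # document frequency
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vecs

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Candidate trace links: one requirement, two code artifacts (hypothetical).
requirement = "user login authentication password".split()
artifacts = {
    "LoginController": "login user password session authentication".split(),
    "ReportPrinter":   "print report export pdf".split(),
}
vecs = tfidf_vectors([requirement] + list(artifacts.values()))
scores = {name: cosine(vecs[0], v) for name, v in zip(artifacts, vecs[1:])}
print(sorted(scores, key=scores.get, reverse=True))
# → ['LoginController', 'ReportPrinter']
```

The low accuracy the abstract describes shows up here as a ranked candidate list the engineer must still vet by hand: the tool proposes links above some similarity threshold, and a human confirms or rejects each one.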

    Q-ESP: a QoS-compliant Security Protocol to enrich IPSec Framework

    IPSec is a protocol that allows secure connections between branch offices and secure VPN access. However, efforts to improve IPSec are still under way; one aspect of this improvement is taking Quality of Service (QoS) requirements into account. QoS is the ability of the network to provide a service at an assured service level while optimizing the global usage of network resources. The QoS level that a flow receives depends on a six-bit identifier in the IP header, the so-called Differentiated Services code point (DSCP). Basically, Multi-Field classifiers classify a packet by inspecting its IP/TCP headers to decide how the packet should be processed. The current IPSec standard offers hardly any guidance here, because the existing IPSec ESP security protocol hides much of this information in its encrypted payloads, preventing network control devices such as routers and switches from using it to perform classification appropriately. To solve this problem, we propose the QoS-friendly Encapsulated Security Payload (Q-ESP), a new IPSec security protocol that provides both security and QoS support. We also present our NetBSD kernel-based implementation as well as our evaluation results for Q-ESP.
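An illustrative sketch (not from the paper) of the six-bit identifier the abstract mentions: the DSCP occupies the upper six bits of the IP header's DS byte (RFC 2474), which is why a router can still read it on an ESP packet even though the inner TCP/UDP headers that Multi-Field classifiers would otherwise inspect are encrypted.

```python
# DSCP extraction from the DS byte (former TOS octet) of an IP header.
def dscp(ds_byte: int) -> int:
    """DSCP is the upper 6 bits of the 8-bit DS field (RFC 2474)."""
    return (ds_byte >> 2) & 0x3F

# Expedited Forwarding (EF) is DSCP 46, i.e. a DS byte of 0xB8.
assert dscp(0xB8) == 46
# Best effort traffic carries DSCP 0.
assert dscp(0x00) == 0
```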

    Travel time estimation in congested urban networks using point detectors data

    A model for estimating travel time on short arterial links of congested urban networks, using currently available technology, is introduced in this thesis. The objective is to estimate travel time with an acceptable level of accuracy for real-life traffic problems, such as congestion management and emergency evacuation. To achieve this research objective, various travel time estimation methods, including highway trajectories, multiple linear regression (MLR), artificial neural networks (ANN), and K-nearest neighbor (K-NN), were applied and tested on the same dataset. The results demonstrate that the ANN and K-NN methods outperform the linear methods by a significant margin and show particularly good performance in detecting congested intervals. To ensure the quality of the analysis, a set of procedures and algorithms based on traffic flow theory and test-field information was introduced to validate and clean the data used to build, train, and test the different models.
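A minimal sketch of the K-NN idea applied to travel time estimation, assuming point-detector features of (flow, occupancy); the thesis's actual feature set, distance metric, and weighting may differ:

```python
# K-nearest-neighbor travel time estimation from point-detector readings.
# History and query values below are hypothetical.
import math

def knn_estimate(history, query, k=3):
    """Average the travel times of the k historical records whose
    detector readings are closest (Euclidean) to the query reading."""
    ranked = sorted(history, key=lambda rec: math.dist(rec[0], query))
    return sum(tt for _, tt in ranked[:k]) / k

# ((flow veh/h, occupancy %), observed link travel time in seconds)
history = [((1200, 10), 45.0), ((1850, 35), 90.0),
           ((1900, 40), 105.0), ((600, 5), 40.0)]
print(knn_estimate(history, (1800, 33), k=2))  # → 97.5
```

The congestion-detection strength the abstract reports follows naturally from this design: a query taken under congested conditions matches historical congested records, so the estimate reflects congested rather than free-flow travel times.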

    Organizational Culture and Job Satisfaction: A Case of Academic Staffs at Universiti Utara Malaysia (UUM)

    The main purpose of this study is to examine and gain a better understanding of the relationships between the dimensions of organizational culture and employees’ job satisfaction among academic staff at UUM. The study was conducted among 135 lecturers at UUM. Data gathered through questionnaires were analyzed using the Statistical Package for the Social Sciences (SPSS), version 12.0. Both descriptive and inferential statistics were used: frequencies and percentages for the descriptive analysis, and multiple regression and Pearson correlation for the inferential analysis. The results showed no significant relationship between two dimensions (emphasis on reward, and performance orientation) and job satisfaction, but significant relationships between the remaining dimensions (organizational supportiveness, innovation and stability, and communication) and job satisfaction.
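For illustration only, a self-contained version of the Pearson correlation statistic the study computed in SPSS, run on hypothetical culture-dimension and satisfaction scores (not the study's data):

```python
# Pearson's r between two score vectors: covariance over the product of
# standard deviations. Sample values below are hypothetical.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

supportiveness = [3.1, 4.0, 2.5, 4.5, 3.8]   # hypothetical Likert means
satisfaction   = [3.0, 4.2, 2.8, 4.4, 3.6]
print(round(pearson_r(supportiveness, satisfaction), 3))
```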

    Video Game Development in a Rush: A Survey of the Global Game Jam Participants

    Video game development is a complex endeavor, often involving complex software, large organizations, and aggressive release deadlines. Several studies have reported that periods of "crunch time" are prevalent in the video game industry, but there are few studies on the effects of time pressure. We conducted a survey with participants of the Global Game Jam (GGJ), a 48-hour hackathon. Based on 198 responses, the results suggest that: (1) iterative brainstorming is the most popular method for conceptualizing initial requirements; (2) continuous integration, minimum viable product, scope management, version control, and stand-up meetings are frequently applied development practices; (3) regular communication, internal playtesting, and dynamic and proactive planning are the most common quality assurance activities; and (4) familiarity with agile development has a weak correlation with perception of success in GGJ. We conclude that GGJ teams rely on ad hoc approaches to development and face-to-face communication, and recommend some complementary practices with limited overhead. Furthermore, as our findings are similar to recommendations for software startups, we posit that game jams and the startup scene share contextual similarities. Finally, we discuss the drawbacks of systemic "crunch time" and argue that game jam organizers are in a good position to problematize the phenomenon.
    Comment: Accepted for publication in IEEE Transactions on Games

    The Effect of Financial Leverage & Systematic Risk on Stock Returns in the Amman Stock Exchange (Analytical Study – Industrial Sector)

    This study aims at evaluating the relationship between the stock returns of industrial companies listed on the Amman Stock Exchange (ASE) and each of systematic risk and financial leverage. Stock returns (Rit) are measured as returns over the holding (acquisition) period. Systematic risk is measured by the beta coefficient (β) using the market model, while financial leverage (Lev) is expressed by the debt ratio. Data on the study variables were collected for 48 industrial companies listed on the ASE for the period from January 2000 to December 2009, in order to determine the relationship between stock returns as the dependent variable and systematic risk and financial leverage as the independent variables. It should be noted that the study shows a statistically significant relationship between the dependent variable and the independent variables; it also found that these independent variables explain 4.4% of the variation in the stock returns of the industrial companies listed on the ASE. The results revealed by the study model were contradictory and do not match well with previous studies conducted on more developed stock markets. Moreover, the direction of some independent variables and their relationship with the dependent variable differed from the hypothesized relationship, an example being the relationship between systematic risk, represented by the beta coefficient, and stock returns. However, these results correspond well with studies conducted on developing markets.
    Keywords: Returns, Systematic Risk, Financial Leverage, Amman Stock Exchange, Industrial Sector
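The market-model beta the study uses as its systematic risk measure is the slope of a regression of stock returns on market returns, which reduces to covariance over variance. A sketch on hypothetical return series (not ASE data):

```python
# Market-model beta: beta = cov(Ri, Rm) / var(Rm).
# Return series below are hypothetical monthly returns.
def beta(stock, market):
    n = len(stock)
    ms, mm = sum(stock) / n, sum(market) / n
    cov = sum((s - ms) * (m - mm) for s, m in zip(stock, market)) / n
    var = sum((m - mm) ** 2 for m in market) / n
    return cov / var

r_stock  = [0.02, -0.01, 0.03, 0.00, 0.04]   # hypothetical stock returns
r_market = [0.01, -0.02, 0.02, 0.00, 0.03]   # hypothetical market returns
print(beta(r_stock, r_market))
```

A beta above 1 means the stock amplifies market movements; the study's contradictory finding concerns whether higher beta is in fact rewarded with higher returns on the ASE.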

    Annotating Privacy Policies in the Sharing Economy

    Applications (apps) of the Digital Sharing Economy (DSE), such as Uber, Airbnb, and TaskRabbit, have become a main enabler of economic growth and shared prosperity in modern-day societies. However, the complex exchange of goods, services, and data that takes place over these apps frequently puts their end-users' privacy at risk. Privacy policies of DSE apps are provided to disclose how private user data is collected and handled. In reality, however, such policies are verbose and difficult to understand, leaving DSE users vulnerable to privacy-intrusive practices. To address these concerns, in this paper we propose an automated approach for annotating privacy policies in the DSE market. Our approach identifies data collection claims in these policies and maps them to the quality features of their apps. Visual and textual annotations are then used to further explain and justify these claims. The proposed approach is evaluated with 18 DSE app users. The results show that annotating privacy policies can significantly enhance their comprehensibility to the average DSE user. Our findings are intended to help DSE app developers draft more comprehensible privacy policies, as well as help their end-users make more informed decisions in one of the fastest-growing software ecosystems in the world.

    Target Coordinates Estimation by Passive Radar with a Single non-Cooperative Transmitter and a Single Receiver

    Passive radar is a bistatic radar that detects and tracks targets by processing reflections of signals from non-cooperative transmitters. Given the bistatic geometry of this radar, a target can be localized in Cartesian coordinates using one of the following configurations: multiple non-cooperative transmitters and a single receiver, or a single non-cooperative transmitter and multiple receivers. However, this diversity of receivers or non-cooperative transmitters leads to extra signal processing and a ghost-target phenomenon. To mitigate these two disadvantages, we present a new method to estimate the Cartesian coordinates of a target with a passive radar system that uses a single non-cooperative transmitter and a single receiver. This method depends on the ability of the radar receiver to analyze the signal-to-noise ratio (SNR) and estimate two arrival angles of the target’s echo signal. The proposed passive radar system is simulated with a Digital Video Broadcasting-Terrestrial (DVB-T) transmitter, and the simulation results show the efficiency of this system compared with the results of other studies.
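For illustration only: the paper derives coordinates from the SNR and two arrival angles, but the underlying bistatic geometry can be sketched self-containedly in 2-D from a measured bistatic range sum and one arrival angle (an assumption for this sketch, not the paper's method). With the receiver at the origin and the transmitter on the x-axis at baseline distance L, the bistatic range equation gives the target's range from the receiver.

```python
# 2-D bistatic localization sketch. Rx at origin, Tx at (L, 0).
# R = |Tx->target| + |target->Rx| is the measured bistatic range sum;
# theta is the echo's arrival angle at the receiver, from the baseline.
import math

def locate(R, L, theta):
    """Target range from Rx via the bistatic range equation, then
    conversion to Cartesian coordinates."""
    r = (R**2 - L**2) / (2 * (R - L * math.cos(theta)))
    return r * math.cos(theta), r * math.sin(theta)

# Round-trip check against a known (hypothetical) target position.
tx, target = (10_000.0, 0.0), (6_000.0, 8_000.0)       # meters
R = math.dist(target, tx) + math.hypot(*target)         # range sum
theta = math.atan2(target[1], target[0])                # arrival angle
print(locate(R, 10_000.0, theta))
```

The single-Tx/single-Rx case in the paper replaces the range-sum measurement assumed here with SNR-based range estimation, and uses a second arrival angle to resolve the third coordinate.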

    Internal Medicine Residency Newsletter, April 2023

    A newsletter created by and for the Rochester General Hospital Internal Medicine Residency Program. In this issue: a word from the PD; news & announcements; resident corner; awards and scholarship; conferences and deadlines; introducing IMRC; article of the month; community outreach activities; happy birthday!; crossword puzzle.